Object detection is the foundation of various critical computer-vision tasks such as segmentation, object tracking, and event detection. Training an object detector with satisfactory accuracy requires large amounts of data. However, because annotating large datasets involves heavy labor, such data curation tasks are often outsourced to third parties or rely on volunteers. This work reveals a severe vulnerability of such data curation pipelines. We propose MACAB, which crafts clean-annotated images to stealthily implant a backdoor into object detectors, even when the data curator can manually audit the images. We observe that both the misclassification and the cloaking backdoor effects are achieved in the wild when the backdoor is activated by inconspicuous natural physical triggers. Compared with existing clean-label attacks on image classification, backdooring non-classification object detection with clean annotations is challenging due to the complexity of multiple objects within each frame, including victim and non-victim objects. The efficacy of MACAB is ensured by constructively abusing the image-scaling function used by deep-learning frameworks, incorporating the proposed adversarial clean image replica technique, and applying poisoned-data selection criteria. Extensive experiments demonstrate that MACAB achieves an attack success rate above 90% in various real-world scenes, covering both the cloaking and the misclassification backdoor effects, even under a restricted small attack budget. The poisoned samples cannot be effectively identified by state-of-the-art detection techniques. A comprehensive video demonstration is available at https://youtu.be/ma7l_lpxkp4, based on a poison rate of 0.14% for the YOLOv4 cloaking backdoor and the Faster R-CNN misclassification backdoor.
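To make the image-scaling abuse concrete, the following is a minimal, self-contained sketch (not MACAB's actual poisoning pipeline) of how a payload can be hidden so that it only appears after a framework downscales the image. It assumes nearest-neighbour interpolation via OpenCV; the decoy/payload images and the helper name `embed_payload` are purely illustrative.

```python
import cv2
import numpy as np

def embed_payload(decoy, payload):
    """Return a copy of `decoy` whose nearest-neighbour downscale to the
    payload's size reproduces `payload` exactly, while the full-resolution
    image still looks almost identical to the decoy."""
    H, W = decoy.shape[:2]
    h, w = payload.shape[:2]
    # Discover which source rows/columns OpenCV samples when downscaling.
    rows = cv2.resize(np.arange(H, dtype=np.float32).reshape(H, 1), (1, h),
                      interpolation=cv2.INTER_NEAREST).astype(int).ravel()
    cols = cv2.resize(np.arange(W, dtype=np.float32).reshape(1, W), (w, 1),
                      interpolation=cv2.INTER_NEAREST).astype(int).ravel()
    crafted = decoy.copy()
    crafted[np.ix_(rows, cols)] = payload   # overwrite only the sampled pixels
    return crafted

# Toy usage: a white 448x448 "benign" image that downscales to a black 64x64 one.
decoy = np.full((448, 448, 3), 255, dtype=np.uint8)
payload = np.zeros((64, 64, 3), dtype=np.uint8)
crafted = embed_payload(decoy, payload)
seen_by_pipeline = cv2.resize(crafted, (64, 64), interpolation=cv2.INTER_NEAREST)
assert np.array_equal(seen_by_pipeline, payload)  # the training pipeline "sees" the payload
```

Because only one in every (scale factor)^2 pixels is overwritten, the crafted image remains visually close to the decoy, which is what lets such samples pass a manual audit.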
Email phishing has become increasingly prevalent, and phishing attacks have grown more sophisticated over time. To combat this rise, many machine learning (ML) algorithms for detecting phishing emails have been developed. However, because of the limited email datasets these algorithms are trained on, they are not adept at recognizing varied attacks and suffer from concept drift: attackers can introduce small changes to the statistical characteristics of their emails or websites to successfully bypass detection. Over time, a gap emerges between the accuracy reported in the literature and the algorithms' actual effectiveness in the real world, which manifests itself as frequent false-positive and false-negative classifications. To this end, we propose a multidimensional risk assessment of emails to reduce the feasibility of an attacker adjusting an email and evading detection. This horizontal approach to phishing detection profiles an incoming email on its main features. We develop a risk assessment framework that includes three models analyzing an email's (1) threat level, (2) cognitive manipulation, and (3) email type, which we combine to return a final risk assessment score. The profiler does not require large datasets for training to be effective, and its analysis of email features reduces the impact of concept drift. Our profiler can be used in conjunction with ML approaches to reduce their misclassifications, or as a labeler for large email datasets during the training stage. We evaluated the efficacy of the profiler against an ML ensemble built from state-of-the-art ML algorithms on a dataset of 9,000 legitimate and 900 phishing emails from a large Australian research organization. Our results show that the profiler reduces the impact of concept drift, producing 30% fewer false-positive and 25% fewer false-negative email classifications than the ML ensemble approach.
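As a rough illustration of how the three sub-scores might be aggregated, the sketch below combines them with a weighted sum. The weights, the linear aggregation, and the decision threshold are assumptions for illustration only and are not taken from the profiler itself.

```python
from dataclasses import dataclass

@dataclass
class EmailProfile:
    threat_level: float            # model 1: 0 (benign indicators) .. 1 (strong threat indicators)
    cognitive_manipulation: float  # model 2: urgency, authority cues, etc.
    email_type_risk: float         # model 3: risk weight of the inferred email type

def risk_score(p: EmailProfile, weights=(0.4, 0.35, 0.25)) -> float:
    """Weighted aggregation of the three sub-scores into one risk value in [0, 1]."""
    w1, w2, w3 = weights
    return w1 * p.threat_level + w2 * p.cognitive_manipulation + w3 * p.email_type_risk

# An email scoring above a tunable threshold can be quarantined, or the score can
# be used to relabel training data / override an ML ensemble's decision.
if risk_score(EmailProfile(0.8, 0.7, 0.5)) > 0.6:
    print("high-risk email")
```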
As an important tool for cyber defence, deception is evolving rapidly and complements existing perimeter security measures to quickly detect breaches and data theft. One of the factors limiting the use of deception has been the cost of generating realistic artefacts by hand. However, recent advances in machine learning create opportunities for scalable, automated generation of realistic deceptions. This vision paper describes the opportunities and challenges involved in developing models that mimic many common elements of the IT stack for deception effects.
Large transformer-based language models have shown excellent performance in natural language processing. Considering the transferability of the knowledge these models acquire in one domain, and the closeness of natural languages to high-level programming languages such as C/C++, this work studies how to leverage (large) transformer-based language models to detect software vulnerabilities and how good these models are at the vulnerability detection task. In this regard, a systematic (cohesive) framework is first presented, detailing source code translation, model preparation, and inference. An empirical analysis is then performed on software vulnerability datasets of C/C++ source code with multiple vulnerability types, corresponding to library function calls, pointer usage, array usage, and arithmetic expressions. Our empirical results demonstrate the good performance of the language models in vulnerability detection. Moreover, these language models achieve better performance metrics, such as F1-score, than contemporary models, namely bidirectional long short-term memory and bidirectional gated recurrent units. Experimenting with language models is always challenging because of the requirements on computing resources, platforms, libraries, and dependencies. Therefore, this paper also analyzes popular platforms for efficiently fine-tuning these models and makes recommendations on platform selection.
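A minimal sketch of the model-preparation and inference stages, assuming a BERT-family encoder with a binary vulnerable/non-vulnerable classification head. The checkpoint `microsoft/codebert-base` and the toy C function are illustrative choices, and in practice the freshly initialised classification head would first be fine-tuned on the labelled vulnerability dataset.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "microsoft/codebert-base"   # assumption: any BERT-style code encoder works
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
# NOTE: the classification head above is randomly initialised; fine-tune it on the
# labelled C/C++ vulnerability dataset before trusting its outputs.

c_function = """
void copy(char *src) {
    char buf[8];
    strcpy(buf, src);   /* unbounded copy into a fixed-size buffer */
}
"""

# Source code is treated as a token sequence, exactly like natural language text.
inputs = tokenizer(c_function, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("P(vulnerable) =", torch.softmax(logits, dim=-1)[0, 1].item())
```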
Existing integrity verification approaches for deep models are designed for private verification (i.e., assuming the service provider is honest, with white-box access to model parameters). However, private verification approaches do not allow model users to verify the model at run-time. Instead, they must trust the service provider, who may tamper with the verification results. In contrast, a public verification approach that considers the possibility of dishonest service providers can benefit a wider range of users. In this paper, we propose PublicCheck, a practical public integrity verification solution for services of run-time deep models. PublicCheck considers dishonest service providers, and overcomes public verification challenges of being lightweight, providing anti-counterfeiting protection, and having fingerprinting samples that appear smooth. To capture and fingerprint the inherent prediction behaviors of a run-time model, PublicCheck generates smoothly transformed and augmented encysted samples that are enclosed around the model's decision boundary while ensuring that the verification queries are indistinguishable from normal queries. PublicCheck is also applicable when knowledge of the target model is limited (e.g., with no knowledge of gradients or model parameters). A thorough evaluation of PublicCheck demonstrates the strong capability for model integrity breach detection (100% detection accuracy with less than 10 black-box API queries) against various model integrity attacks and model compression attacks. PublicCheck also demonstrates the smooth appearance, feasibility, and efficiency of generating a plethora of encysted samples for fingerprinting.
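The run-time verification step itself can be as simple as the sketch below. Generating the encysted fingerprinting samples is the paper's actual contribution and is abstracted here behind a precomputed list of (sample, expected_label) pairs recorded against the genuine model; the function name and signature are illustrative.

```python
from typing import Any, Callable, List, Tuple

def verify_model(predict: Callable[[Any], int],
                 fingerprints: List[Tuple[Any, int]],
                 max_queries: int = 10) -> bool:
    """Return True if the deployed model reproduces the recorded predictions
    on the fingerprinting (encysted) samples, i.e. no integrity breach is seen."""
    for sample, expected in fingerprints[:max_queries]:
        if predict(sample) != expected:   # any flipped prediction => tampering
            return False
    return True

# Usage: `predict` wraps the service's black-box prediction API; `fingerprints`
# were generated offline against the genuine model and kept secret by the verifier.
```

Because the encysted samples sit near the decision boundary, even small tampering (fine-tuning, compression, backdooring) tends to flip some of their labels, which is what makes a handful of black-box queries sufficient.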
Cyber deception is emerging as a promising approach for defending networks and systems against attackers and data thieves. However, even though deceptive deployments are relatively cheap, generating realistic content at scale is very costly, largely because rich interactive deception technologies are mostly hand-crafted. With recent improvements in machine learning, we now have the opportunity to bring scale and automation to the creation of realistic and enticing simulated content. In this work, we propose a framework to automate the generation of email and instant-messaging-style group communications at scale. Such messaging platforms within organizations contain much valuable information inside private communications and document attachments, making them an enticing target for adversaries. We address two key aspects of simulating such systems: modelling when and with whom participants communicate, and generating topical, multi-party text to populate the simulated conversation threads. We present the LogNormMix-Net temporal point process as an approach, building on the intensity modelling approach of Shchur et al.~\cite{Shchur2019Ints}, to create a generative model for both unicast and multicast communications. We demonstrate the use of fine-tuned, pre-trained language models to generate convincing multi-party conversation threads. A live email server is simulated by combining the LogNormMix-Net TPP (to generate the communication timestamps, senders, and recipients) with the language model, which generates the contents of the multi-party email threads. We evaluate the generated content with respect to a number of realism-based properties, which encourage the model to learn to generate content that will draw an adversary's attention in order to achieve a deception outcome.
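As a toy illustration of the timestamp/participant side only, the sketch below samples inter-arrival times from a fixed log-normal mixture. The real LogNormMix-Net conditions the mixture parameters on the communication history with a neural network, and the message bodies come from the fine-tuned language model, so the parameter values and user names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = np.array([0.7, 0.3])                              # assumed mixture weights
mus, sigmas = np.array([1.0, 3.0]), np.array([0.5, 1.0])    # assumed log-normal params (log-minutes)
users = ["alice", "bob", "carol", "dave"]                   # hypothetical participants

t = 0.0
for _ in range(5):
    k = rng.choice(len(weights), p=weights)            # pick a mixture component
    t += rng.lognormal(mean=mus[k], sigma=sigmas[k])   # inter-arrival gap in minutes
    sender = rng.choice(users)
    recipients = [u for u in users if u != sender and rng.random() < 0.5]  # multicast
    print(f"t={t:7.1f} min  {sender} -> {recipients or ['(broadcast)']}")
```

Each sampled (timestamp, sender, recipients) tuple would then be handed to the language model to fill in the body of the simulated thread.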
Spear Phishing is a harmful cyber-attack facing businesses and individuals worldwide. Considerable research has been conducted recently into the use of Machine Learning (ML) techniques to detect spear-phishing emails. ML-based solutions may suffer from zero-day attacks: unseen attacks unaccounted for in the training data. As new attacks emerge, classifiers trained on older data are unable to detect these new varieties of attacks, resulting in increasingly inaccurate predictions. Spear Phishing detection also faces scalability challenges due to the growth of the required features, which is proportional to the number of senders within a receiver's mailbox. This differs from traditional phishing detection, which typically performs only a binary classification between phishing and benign emails. Therefore, we devise a possible solution to these problems, named RAIDER: Reinforcement AIded Spear Phishing DEtectoR, a reinforcement-learning-based feature evaluation system that can automatically find the optimum features for detecting different types of attacks. By leveraging a reward and penalty system, RAIDER allows for autonomous feature selection. RAIDER also keeps the number of features to a minimum by selecting only the significant features to represent phishing emails and detect spear-phishing attacks. After extensive evaluation of RAIDER over 11,000 emails and across 3 attack scenarios, our results suggest that using reinforcement learning to automatically identify the significant features could reduce the dimensions of the required features by 55% in comparison to existing ML-based systems. It also improves the accuracy of detecting spoofing attacks by 4%, from 90% to 94%. In addition, RAIDER demonstrates reasonable detection accuracy even against a sophisticated attack named Known Sender, in which spear-phishing emails greatly resemble those of the impersonated sender.
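The abstract does not spell out RAIDER's state and reward design, so the sketch below is only a generic reward/penalty feature-selection loop in that spirit: the epsilon-greedy policy, the `evaluate` callback, and the reward values are assumptions.

```python
import random

def select_features(all_features, evaluate, episodes=200, eps=0.1):
    """`all_features` is a list of candidate feature names; `evaluate(subset)`
    returns validation accuracy of a quick classifier trained on that subset."""
    q = {f: 0.0 for f in all_features}      # running value estimate per feature
    selected, best = set(), evaluate(set())
    for _ in range(episodes):
        f = (random.choice(all_features) if random.random() < eps
             else max(q, key=q.get))        # explore vs. exploit
        trial = selected ^ {f}              # toggle the chosen feature
        acc = evaluate(trial)
        reward = 1.0 if acc > best else -1.0   # reward improvement, penalise the rest
        q[f] += 0.1 * (reward - q[f])
        if acc > best:
            selected, best = trial, acc
    return selected
```

Keeping only the features with positive learned value is what holds the dimensionality down as the number of senders grows.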
A recent trojan attack on deep neural network (DNN) models is one insidious variant of data poisoning attacks. Trojan attacks exploit an effective backdoor created in a DNN model by leveraging the difficulty of interpreting the learned model to misclassify any inputs signed with the attacker's chosen trojan trigger. Since the trojan trigger is a secret guarded and exploited by the attacker, detecting such trojan inputs is a challenge, especially at run-time when models are in active operation. This work builds a STRong Intentional Perturbation (STRIP) based run-time trojan attack detection system and focuses on vision systems. We intentionally perturb the incoming input, for instance by superimposing various image patterns, and observe the randomness of predicted classes for the perturbed inputs from a given deployed model, whether malicious or benign. A low entropy in predicted classes violates the input-dependence property of a benign model and implies the presence of a malicious input, a characteristic of a trojaned input. The high efficacy of our method is validated through case studies on three popular and contrasting datasets: MNIST, CIFAR10 and GTSRB. We achieve an overall false acceptance rate (FAR) of less than 1%, given a preset false rejection rate (FRR) of 1%, for different types of triggers. Using CIFAR10 and GTSRB, we have empirically achieved results of 0% for both FRR and FAR. We have also evaluated STRIP robustness against a number of trojan attack variants and adaptive attacks.
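A minimal sketch of the test described above: superimpose the incoming image with held-out clean images, measure the entropy of the model's predictions, and flag abnormally low average entropy. The blend ratio and the detection threshold are assumptions that would be calibrated against the preset FRR; `model` stands for any classifier that returns class probabilities for a batch.

```python
import numpy as np

def strip_entropy(model, x, clean_images, n=32, alpha=0.5):
    """Average prediction entropy of `x` superimposed with n clean images."""
    entropies = []
    for bg in clean_images[:n]:
        blended = np.clip(alpha * x + (1 - alpha) * bg, 0.0, 1.0)  # superimpose patterns
        p = model(blended[None, ...])[0]                           # class probabilities
        entropies.append(-np.sum(p * np.log(p + 1e-12)))
    return float(np.mean(entropies))

def is_trojaned(model, x, clean_images, threshold):
    # `threshold` is chosen on clean data so that the FRR (clean inputs rejected) is 1%.
    return strip_entropy(model, x, clean_images) < threshold
```

The intuition is that a trigger dominates the trojaned model's decision, so predictions stay constant (low entropy) no matter what is superimposed, whereas benign inputs yield varied predictions (high entropy).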
This paper revisits the work of Rauch et al. (1965) and develops a novel method for recursive maximum likelihood particle filtering for general state-space models. The new method is based on a statistical analysis of incomplete observations of the systems. The score function and conditional observed information of the incomplete observations/data are introduced and their distributional properties are discussed. Some identities concerning the score function and information matrices of the incomplete data are derived. Maximum likelihood estimation of the state-vector is presented in terms of the score function and observed information matrices. In particular, to deal with nonlinear state-spaces, a sequential Monte Carlo method is developed. It is given recursively by an EM-gradient-particle filtering which extends the work of Lange (1995) for state estimation. To derive the covariance matrix of state-estimation errors, an explicit form of the observed information matrix is proposed. It extends Louis's (1982) general formula for the same matrix to state-vector estimation. Under (Neumann) boundary conditions on the state transition probability distribution, the inverse of this matrix coincides with the Cramer-Rao lower bound on the covariance matrix of estimation errors of an unbiased state-estimator. In the case of linear models, the method shows that the Kalman filter is a fully efficient state estimator whose covariance matrix of estimation error coincides with the Cramer-Rao lower bound. Some numerical examples are discussed to exemplify the main results.
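For reference, the two standard incomplete-data identities this kind of analysis builds on can be stated as follows (standard results, written in a notation chosen here rather than the paper's, with x the unobserved state and y the observations): Fisher's identity for the incomplete-data score,
\[
S(\theta; y) \;=\; \nabla_\theta \log p_\theta(y) \;=\; \mathbb{E}_\theta\!\left[\, \nabla_\theta \log p_\theta(x, y) \,\middle|\, y \,\right],
\]
and Louis's formula for the observed information,
\[
I(\theta; y) \;=\; \mathbb{E}_\theta\!\left[\, -\nabla_\theta^2 \log p_\theta(x, y) \,\middle|\, y \,\right]
\;-\; \mathbb{E}_\theta\!\left[\, S_c(\theta)\, S_c(\theta)^\top \,\middle|\, y \,\right]
\;+\; S(\theta; y)\, S(\theta; y)^\top,
\]
where $S_c(\theta) = \nabla_\theta \log p_\theta(x, y)$ is the complete-data score. The classical Cramer-Rao bound then reads $\operatorname{Cov}_\theta(\hat\theta) \succeq \mathcal{I}(\theta)^{-1}$ for an unbiased estimator $\hat\theta$, with $\mathcal{I}(\theta) = \mathbb{E}_\theta[\, I(\theta; y) \,]$ the expected information.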
During software development, developers need to answer queries about semantic aspects of code. Even though natural language has been studied extensively with neural approaches, the problem of answering semantic queries over code with neural networks has not been explored. This is mainly because there is no existing dataset with extractive question-and-answer pairs over code that involve complex concepts and long chains of reasoning. We bridge this gap by building a new, curated dataset called CodeQueries and proposing a neural question-answering methodology over code. We build on a state-of-the-art pre-trained model of code to predict answer and supporting-fact spans. Given a query and code, only some of the code may be relevant to answering the query. We first experiment under an ideal setting where only the relevant code is given to the model, and show that our models do well. We then experiment under three pragmatic considerations: (1) scaling to large-sized code, (2) learning from a limited number of examples, and (3) robustness to minor syntax errors in code. Our results show that while a neural model can be resilient to minor syntax errors in code, the increasing size of the code, the presence of code irrelevant to the query, and a reduced number of training examples limit model performance. We are releasing our data and models to facilitate future work on the proposed problem of answering queries over code semantics.
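A minimal sketch of extractive span prediction over code in the style described above. The specific pre-trained code model and the two-span (answer plus supporting fact) head are abstracted away; the generic QA checkpoint `deepset/roberta-base-squad2`, the query, and the toy snippet stand in purely for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

checkpoint = "deepset/roberta-base-squad2"   # illustrative stand-in checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

query = "Which variable is used as an index into the list?"
code = "def get(items, i):\n    if i < len(items):\n        return items[i]\n    return None"

# The query and the code are encoded as one sequence; the model scores every
# token position as a potential start/end of the answer span.
inputs = tokenizer(query, code, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)
start = int(out.start_logits.argmax())
end = int(out.end_logits.argmax())
answer_span = tokenizer.decode(inputs["input_ids"][0][start:end + 1])
print("predicted answer span:", answer_span)
```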